This study examined the use of natural language processing (NLP) models to assess whether the language patterns used by item writers for medical licensing examinations contain evidence of biased or stereotypical language. Bias of this type in item language choices can be particularly impactful for items in medical licensure assessments because it can pose a threat to content validity and to the evidence supporting the validity of test scores. To the best of our knowledge, this is the first attempt to use machine learning (ML) and NLP to explore language bias in a large item bank. Using prediction algorithms trained on clusters of similar item stems, we demonstrate that our approach can be used to review large item banks for potentially biased language or stereotypical patient characteristics in clinical science items. The findings can guide the development of methods for addressing stereotypical language patterns found in test items and, where needed, enable these items to be updated efficiently to reflect contemporary norms, thereby strengthening the evidence supporting the validity of test scores.
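As a rough illustration of how such a review might be automated, the hedged sketch below clusters item stems by TF-IDF similarity and tallies the frequency of a small, hypothetical list of demographic descriptors within each cluster; the column names, descriptor list, and clustering choices are assumptions, not the study's actual pipeline.

```python
# Illustrative sketch (not the paper's pipeline): cluster item stems by TF-IDF
# similarity, then inspect how often hypothetical descriptors occur per cluster.
import pandas as pd
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

items = pd.DataFrame({
    "stem": [
        "A 45-year-old woman presents with chest pain ...",
        "A 60-year-old man presents with shortness of breath ...",
        # a real item bank would contain thousands of stems
    ]
})

vec = TfidfVectorizer(stop_words="english", max_features=5000)
X = vec.fit_transform(items["stem"])

# Group similar stems so language patterns can be compared within clusters.
km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
items["cluster"] = km.labels_

# Hypothetical descriptor list used only to illustrate a frequency audit.
descriptors = ["woman", "man", "obese", "anxious"]
for term in descriptors:
    items[term] = items["stem"].str.contains(term, case=False)

print(items.groupby("cluster")[descriptors].mean())
```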
Migraine is a high-prevalence and disabling neurological disorder. However, information about migraine management in real-world settings may be limited in traditional health information sources. In this paper, we (i) verify that there is substantial migraine-related chatter available on social media (Twitter and Reddit), self-reported by migraine sufferers; (ii) develop a platform-independent text classification system for automatically detecting self-reported migraine-related posts, and (iii) conduct analyses of the self-reported posts to assess the utility of social media for studying this problem. We manually annotated 5750 Twitter posts and 302 Reddit posts. Our system achieved an F1 score of 0.90 on Twitter and 0.93 on Reddit. Analysis of the information posted by our 'migraine cohort' revealed a plethora of relevant information about migraine therapies and the patient sentiments associated with them. Our study forms the foundation for conducting an in-depth analysis of migraine-related information using social media data.
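A minimal sketch of a platform-independent self-report classifier in this spirit is shown below; the paper does not specify its model here, so a TF-IDF plus logistic-regression baseline and toy posts are assumed.

```python
# Minimal sketch, assuming a TF-IDF + logistic-regression baseline
# (the authors' actual classifier may differ).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.metrics import f1_score

# Hypothetical annotated posts: 1 = self-reported migraine, 0 = other chatter.
train_texts = ["my migraine kept me up all night", "migraine awareness week is here"]
train_labels = [1, 0]
test_texts = ["another migraine attack, triptans not helping"]
test_labels = [1]

clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)),
                    LogisticRegression(max_iter=1000))
clf.fit(train_texts, train_labels)
print("F1:", f1_score(test_labels, clf.predict(test_texts)))
```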
Multi-Task Learning (MTL) has shown its importance in user-facing products for fast training, data efficiency, reduced overfitting, etc. MTL achieves this by sharing network parameters and training a single network for multiple tasks simultaneously. However, MTL does not provide a solution if each task needs to be trained on a different dataset. To solve this problem, we propose an architecture named TreeDNN along with its training methodology. TreeDNN helps train a model on multiple datasets simultaneously, where each branch of the tree may need a different training dataset. Our results show that TreeDNN provides competitive performance, with the advantages of a reduced ROM requirement for parameter storage and increased system responsiveness from loading only the specific branch needed at inference time.
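The sketch below illustrates the general idea of a tree-structured model whose shared trunk feeds dataset-specific branches, with each branch trained on its own data and only one branch needed at inference; layer sizes, branch names, and the training loop are assumptions rather than the paper's TreeDNN specification.

```python
# Schematic sketch of a tree-structured multi-dataset model (not the paper's
# exact TreeDNN): a shared trunk feeds per-branch heads.
import torch
import torch.nn as nn

class TreeDNNSketch(nn.Module):
    def __init__(self, in_dim=32, trunk_dim=64, branch_out={"taskA": 10, "taskB": 5}):
        super().__init__()
        self.trunk = nn.Sequential(nn.Linear(in_dim, trunk_dim), nn.ReLU())
        # One head per branch; only the needed head is loaded at inference time.
        self.branches = nn.ModuleDict(
            {name: nn.Linear(trunk_dim, out) for name, out in branch_out.items()}
        )

    def forward(self, x, branch):
        return self.branches[branch](self.trunk(x))

model = TreeDNNSketch()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# Each branch draws batches from its own (here randomly generated) dataset.
datasets = {
    "taskA": (torch.randn(16, 32), torch.randint(0, 10, (16,))),
    "taskB": (torch.randn(16, 32), torch.randint(0, 5, (16,))),
}
for branch, (x, y) in datasets.items():
    opt.zero_grad()
    loss_fn(model(x, branch), y).backward()
    opt.step()
```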
Predicting the political polarity of news headlines is a challenging task that becomes even more challenging in a multilingual setting with low-resource languages. To deal with this, we propose to utilise inferential commonsense knowledge via a Translate-Retrieve-Translate strategy to introduce a learning framework. To begin with, we use the method of translation and retrieval to acquire the inferential knowledge in the target language. We then employ an attention mechanism to emphasise important inferences. We finally integrate the attended inferences into a multilingual pre-trained language model for the task of bias prediction. To evaluate the effectiveness of our framework, we present a dataset of over 62.6K multilingual news headlines in five European languages annotated with their respective political polarities. We evaluate several state-of-the-art multilingual pre-trained language models, since their performance tends to vary across languages (low/high resource). Evaluation results demonstrate that our proposed framework is effective regardless of the model employed. Overall, the best-performing model trained on headlines alone achieves 0.90 accuracy and F1, and a 0.83 Jaccard score. With attended knowledge in our framework, the same model shows an increase of 2.2% in accuracy and F1, and of 3.6% in Jaccard score. Extending our experiments to individual languages reveals that the models we analyze perform significantly worse for Slovenian than for the other languages in our dataset. To investigate this, we assess the effect of translation quality on prediction performance, which indicates that the disparity in performance is most likely due to poor translation quality. We release our dataset and scripts at https://github.com/Swati17293/KG-Multi-Bias for future research. Our framework has the potential to benefit journalists, social scientists, news producers, and consumers.
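The attention step can be pictured with the hedged sketch below, in which a headline embedding attends over retrieved inference embeddings before classification; the dimensions, the classifier head, and the use of random tensors in place of a multilingual encoder such as XLM-R are assumptions for illustration only.

```python
# Illustrative sketch of the attention step only (not the full
# Translate-Retrieve-Translate pipeline).
import torch
import torch.nn as nn

class InferenceAttention(nn.Module):
    def __init__(self, dim=768, num_classes=3):
        super().__init__()
        self.attn = nn.MultiheadAttention(embed_dim=dim, num_heads=8, batch_first=True)
        self.classifier = nn.Linear(2 * dim, num_classes)

    def forward(self, headline_emb, inference_embs):
        # headline_emb: (batch, dim); inference_embs: (batch, k, dim)
        query = headline_emb.unsqueeze(1)
        attended, _ = self.attn(query, inference_embs, inference_embs)
        fused = torch.cat([headline_emb, attended.squeeze(1)], dim=-1)
        return self.classifier(fused)

# In practice the embeddings would come from a multilingual encoder
# such as XLM-R; random tensors stand in here.
model = InferenceAttention()
logits = model(torch.randn(4, 768), torch.randn(4, 10, 768))
print(logits.shape)  # (4, 3)
```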
Agriculture is at the heart of the solution to achieving sustainability in feeding the world population, but we still need to advance our understanding of how agricultural output responds to climatic variability. Precision Agriculture (PA), a management strategy that uses technologies such as remote sensing, Geographical Information Systems (GIS), and machine learning for decision making in the field, has emerged as a promising approach to enhance crop production, increase yield, and reduce water and nutrient losses and environmental impacts. In this context, multiple models have been developed to predict agricultural phenotypes, such as crop yield, from genomics (G), environment (E), weather and soil, and field management practices (M). These models have traditionally been based on mechanistic or statistical approaches. More recently, AI approaches, which are intrinsically well suited to modeling complex interactions, have been developed and have outperformed classical methods. Here, we present a Natural Language Processing (NLP)-based neural network architecture to process the G, E and M inputs and their interactions. We show that by modeling DNA as natural language, our approach performs better than previous approaches when tested in new environments, and similarly to other approaches for unseen seed varieties.
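A minimal sketch of the "DNA as natural language" idea is given below: genotype sequences are split into k-mer tokens, embedded, pooled, and combined with environment (E) and management (M) features to predict a phenotype such as yield; the architecture and all sizes are assumptions, not the paper's model.

```python
# Minimal sketch, assuming k-mer tokenization and a small feed-forward head.
import torch
import torch.nn as nn

def kmer_tokens(seq, k=3):
    return [seq[i:i + k] for i in range(len(seq) - k + 1)]

vocab = {kmer: i for i, kmer in enumerate(sorted(
    {a + b + c for a in "ACGT" for b in "ACGT" for c in "ACGT"}))}

class GxExMSketch(nn.Module):
    def __init__(self, vocab_size, emb_dim=32, env_dim=8, mgmt_dim=4):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb_dim)
        self.head = nn.Sequential(
            nn.Linear(emb_dim + env_dim + mgmt_dim, 64), nn.ReLU(), nn.Linear(64, 1)
        )

    def forward(self, token_ids, env, mgmt):
        g = self.emb(token_ids).mean(dim=1)  # pooled "sentence" embedding of the genotype
        return self.head(torch.cat([g, env, mgmt], dim=-1))

ids = torch.tensor([[vocab[t] for t in kmer_tokens("ACGTACGTAC")]])
model = GxExMSketch(len(vocab))
yield_pred = model(ids, torch.randn(1, 8), torch.randn(1, 4))
print(yield_pred.shape)  # (1, 1)
```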
Measuring and monitoring soil organic carbon is critical for agricultural productivity and for addressing critical environmental problems. Soil organic carbon not only enriches nutrition in soil, but also has a gamut of co-benefits, such as improving water storage and limiting physical erosion. Despite a litany of work on soil organic carbon estimation, current approaches do not generalize well across soil conditions and management practices. We empirically show that explicit modeling of cause-and-effect relationships among the soil processes improves the out-of-distribution generalizability of prediction models. We provide a comparative analysis of soil organic carbon estimation models in which the skeleton is estimated using causal discovery methods. Our framework provides an average improvement of 81% in test mean squared error and 52% in test mean absolute error.
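The hedged sketch below illustrates the underlying intuition on synthetic data: a regressor restricted to assumed causal parents of soil organic carbon (SOC) is compared against one that also uses a proxy variable whose correlation with the parents breaks under distribution shift. The variables and the causal skeleton here are assumptions; the paper instead estimates the skeleton with causal discovery methods.

```python
# Illustrative comparison only, on synthetic data.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(0)
n = 500
clay = rng.normal(size=n)
rainfall = rng.normal(size=n)
proxy = clay + 0.1 * rng.normal(size=n)       # tracks clay only in the training region
soc = 0.8 * clay + 0.5 * rainfall + 0.1 * rng.normal(size=n)

X_all = np.column_stack([clay, rainfall, proxy])
X_causal = np.column_stack([clay, rainfall])  # assumed causal parents of SOC

# Out-of-distribution test: the proxy decouples from clay.
clay_t, rain_t, proxy_t = rng.normal(size=n), rng.normal(size=n), rng.normal(size=n)
soc_t = 0.8 * clay_t + 0.5 * rain_t + 0.1 * rng.normal(size=n)

for name, Xtr, Xte in [("all covariates", X_all, np.column_stack([clay_t, rain_t, proxy_t])),
                       ("causal parents", X_causal, np.column_stack([clay_t, rain_t]))]:
    model = RandomForestRegressor(random_state=0).fit(Xtr, soc)
    print(name, "test MSE:", mean_squared_error(soc_t, model.predict(Xte)))
```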
This paper examines two distinct but related questions associated with explainable AI (XAI) practices. Machine learning (ML) is increasingly important in financial services, for example in pre-approval, credit underwriting, investments, and various front-end and back-end activities. ML can automatically detect non-linearities and interactions in training data, facilitating faster and more accurate credit decisions. However, ML models are opaque and hard to interpret, which are key elements needed to establish reliable technology. The study compares various machine learning models, including single classifiers (logistic regression, decision trees, LDA, QDA), heterogeneous ensembles (AdaBoost, Random Forest), and sequential neural networks. The results indicate that the ensemble classifiers and neural networks perform best. In addition, two advanced post-hoc interpretability techniques, LIME and SHAP, are used to assess ML-based credit scoring models on the open-access dataset provided by the US-based P2P lending platform Lending Club. For this study, we also use machine learning algorithms to develop new investment models and explore portfolio strategies that can maximize profitability while minimizing risk.
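A hedged sketch of applying the two post-hoc explainers to a credit-style classifier follows; the synthetic features and column names stand in for the Lending Club data and are assumptions.

```python
# Sketch of post-hoc explanations with SHAP and LIME on a synthetic credit model.
import numpy as np
import shap
from lime.lime_tabular import LimeTabularExplainer
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
feature_names = ["loan_amount", "annual_income", "dti", "fico_score"]  # assumed columns
X = rng.normal(size=(300, 4))
y = (X[:, 3] - X[:, 2] + 0.3 * rng.normal(size=300) > 0).astype(int)   # 1 = fully paid

model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# SHAP: tree-specific explainer, per-feature attributions for every applicant.
shap_values = shap.TreeExplainer(model).shap_values(X)

# LIME: local surrogate explanation for a single applicant.
lime_exp = LimeTabularExplainer(X, feature_names=feature_names,
                                class_names=["default", "paid"])
print(lime_exp.explain_instance(X[0], model.predict_proba, num_features=4).as_list())
```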
Online advertising has recently grown into a highly competitive and complex multi-billion-dollar industry, with advertisers bidding for ad slots at large scale and high frequency. This has resulted in a growing need for efficient "auto-bidding" algorithms that determine the bids for incoming queries to maximize advertisers' objectives subject to their specified constraints. This work explores efficient online algorithms for a single value-maximizing advertiser under an increasingly popular constraint: Return-on-Spend (RoS). We quantify efficiency in terms of regret relative to the optimal algorithm that knows all queries a priori. We contribute a simple online algorithm that achieves near-optimal regret in expectation while always respecting the specified RoS constraint when the input sequence of queries consists of i.i.d. samples from some distribution. We also integrate our results with the previous work of Balseiro, Lu, and Mirrokni [BLM20] to achieve near-optimal regret while respecting both the RoS and a fixed budget constraint. Our algorithm follows the primal-dual framework and uses online mirror descent (OMD) for the dual updates. However, we need to use a non-canonical setup of OMD, so the classical low-regret guarantee of OMD, which holds for the adversarial setting in online learning, no longer applies. Nonetheless, in our case, and more generally in settings where low-regret dynamics are applied in algorithm design, the gradients encountered by OMD can be far from adversarial and are instead influenced by our own algorithmic choices. We exploit this key insight to show that our OMD setup incurs low regret in the realm of our algorithm.
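A schematic, much-simplified sketch of dual-based bidding under an RoS constraint is shown below: the bid is scaled by a dual variable that is updated with an online descent step on the constraint slack. It is meant only to convey the primal-dual flavor; the paper's actual algorithm, its OMD setup, and its step sizes differ.

```python
# Schematic primal-dual sketch for the RoS constraint (value >= spend),
# not the paper's exact algorithm or its regret-optimal parameters.
import numpy as np

rng = np.random.default_rng(0)
T, eta = 10_000, 0.01
mu = 1.0                                  # dual variable for the RoS constraint
total_value = total_spend = 0.0

for t in range(T):
    v = rng.uniform(0, 1)                 # advertiser's value for the query
    competing = rng.uniform(0, 1)         # highest competing bid (second-price auction)
    bid = (1.0 + mu) / mu * v             # dual-scaled bid
    slack = 0.0
    if bid >= competing:                  # win the auction, pay the second price
        total_value += v
        total_spend += competing
        slack = v - competing             # per-round contribution to the constraint
    # Dual descent: raise mu (bid less aggressively) when spend outpaces value.
    mu = max(mu - eta * slack, 1e-6)

print(f"realized RoS = {total_value / max(total_spend, 1e-9):.3f}")
```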
Many fundamental problems in machine learning can be formulated by the convex program \[\min_{\theta \in \mathbb{R}^d} \sum_{i=1}^{n} f_i(\theta),\] where each $f_i$ is a convex, Lipschitz function supported on a subset of $d_i$ coordinates of $\theta$. One common approach to this problem, exemplified by stochastic gradient descent, involves sampling one $f_i$ term at every iteration to make progress. This approach crucially relies on a notion of uniformity across the $f_i$'s, formally captured by their condition number. In this work, we give an algorithm that minimizes the above convex formulation to $\epsilon$-accuracy in $\widetilde{O}(\sum_{i=1}^n d_i \log(1/\epsilon))$ gradient computations, with no assumptions on the condition number. The previous best algorithm independent of the condition number is the standard cutting plane method, which requires $O(nd \log(1/\epsilon))$ gradient computations. As a corollary, we improve upon the evaluation oracle complexity of Axiotis et al. (ICML 2021) for decomposable submodular function minimization. Our main technical contribution is an adaptive procedure that selects an $f_i$ term at every iteration via a novel combination of cutting-plane and interior-point methods.
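For context, the sketch below shows the baseline sampling scheme the abstract contrasts with: one $f_i$ term, supported on a few coordinates, is sampled per iteration of a first-order method. The quadratic terms and step size are assumptions, and this is not the paper's cutting-plane/interior-point procedure.

```python
# Toy illustration of the problem setup: each convex f_i depends only on a
# small coordinate subset, and plain SGD samples one term per iteration.
import numpy as np

rng = np.random.default_rng(0)
d, n = 50, 200
supports = [rng.choice(d, size=5, replace=False) for _ in range(n)]  # d_i = 5
targets = [rng.normal(size=5) for _ in range(n)]

def grad_fi(theta, i):
    # f_i(theta) = 0.5 * ||theta[S_i] - b_i||^2, supported on coordinates S_i.
    g = np.zeros(d)
    g[supports[i]] = theta[supports[i]] - targets[i]
    return g

def objective(theta):
    return 0.5 * sum(np.sum((theta[S] - b) ** 2) for S, b in zip(supports, targets))

theta, lr = np.zeros(d), 0.5
for t in range(20_000):
    i = rng.integers(n)                   # sample one f_i term per iteration
    theta -= lr * grad_fi(theta, i)       # approximate minimization of the sum
print(objective(np.zeros(d)), "->", objective(theta))
```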
Representational similarity analysis (RSA) is a method from cognitive neuroscience that helps compare representations from two different sources of data. In this paper, we propose using representational similarity analysis to probe the semantic grounding in language models of code. We probe the semantic grounding of the CodeBERT model using data from the IBM CodeNet dataset. Through our experiments, we show that current pre-training methods do not induce semantic grounding in language models of code, but instead focus on optimizing form-based patterns. We also show that even a little fine-tuning on a semantically relevant task substantially increases the semantic grounding of CodeBERT. Our ablations of the input modalities to CodeBERT show that using bimodal inputs (code and natural language) rather than unimodal inputs (code only) provides better semantic grounding and sample efficiency during semantic fine-tuning. Finally, our experiments with semantic perturbations in code reveal that CodeBERT is able to robustly distinguish between semantically correct and incorrect code.
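Representational similarity analysis itself can be summarized with the short sketch below: build a representational dissimilarity matrix (RDM) for each representation space and correlate their upper triangles. The random matrices stand in for, e.g., CodeBERT embeddings of a set of programs and a reference semantic representation of the same programs.

```python
# Minimal RSA sketch with placeholder representations.
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
reps_model = rng.normal(size=(40, 768))      # e.g. CodeBERT embeddings of 40 programs
reps_reference = rng.normal(size=(40, 128))  # e.g. a reference semantic representation

rdm_model = pdist(reps_model, metric="correlation")        # condensed upper triangle
rdm_reference = pdist(reps_reference, metric="correlation")

rho, p = spearmanr(rdm_model, rdm_reference)
print(f"RSA similarity (Spearman rho) = {rho:.3f}, p = {p:.3g}")
```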